Microsoft lays off AI ethics team
Microsoft has laid off a team dedicated to ensuring the responsible development and deployment of AI. Platformer reports that the ethics and society team was laid off as part of wider cuts to Microsoft's workforce. The decision leaves Microsoft with fewer experts working to ensure its solutions are safe and have a net positive impact. Meanwhile, Microsoft's reputation as an AI leader has deepened following its exclusive partnership with OpenAI, and the duo continue to deliver powerful new AI capabilities across Microsoft's products.
- North America > United States > California (0.06)
- Europe > Netherlands > North Holland > Amsterdam (0.06)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.73)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.64)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.31)
FCAI and Women in AI Ethics team up to make AI more inclusive and ethical -- FCAI
This collaboration aims to increase diversity in an AI tech space where women have historically been excluded. The Finnish Center for Artificial Intelligence (FCAI) and Women in AI Ethics (WAIE) have announced a partnership to increase diversity and ethics in AI. Both organizations focus on empowering people to solve real-life problems through AI that is designed and deployed in an ethical and trustworthy manner. The alliance will increase representation of voices from the Nordic region in the WAIE directory, an online resource that helps recruiters and event organizers find diverse talent, and will open up opportunities for these women to be recognized through WAIE's highly regarded annual "100 Brilliant Women in AI Ethics" list. The collaboration will also include co-hosted educational workshops led by diverse leaders in the AI ethics space, offering a critical lens on real-world solutions. In addition, it will create a virtuous cycle of mentorship, connecting FCAI's faculty and students with WAIE's rich network of women and non-binary practitioners at all stages of their AI ethics careers, inspiring others from marginalized communities to join this important space.
Are AI ethics teams doomed to be a facade? Women who pioneered them weigh in
The Transform Technology Summits start October 13th with Low-Code/No Code: Enabling Enterprise Agility. The concept of "ethical AI" hardly existed just a few years ago, but times have changed. After countless discoveries of AI systems causing real-world harm, and a slew of professionals sounding the alarm, tech companies now know that all eyes -- from customers to regulators -- are on their AI. They also know this is something they need to have an answer for. That answer, in many cases, has been to establish in-house AI ethics teams.
Google says it's committed to ethical AI research. Its ethical AI team isn't so sure.
Six months after star AI ethics researcher Timnit Gebru said Google fired her over an academic paper scrutinizing a technology that powers some of the company's key products, the company says it's still deeply committed to ethical AI research. It promised to double its research staff studying responsible AI to 200 people, and CEO Sundar Pichai has pledged his support to fund more ethical AI projects. Jeff Dean, the company's head of AI, said in May that while the controversy surrounding Gebru's departure was a "reputational hit," it's time to move on. But some current members of Google's tightly knit ethical AI group told Recode the reality is different from the one Google executives are publicly presenting. The 10-person group, which studies how artificial intelligence impacts society, is a subdivision of Google's broader new responsible AI organization.
What is Sustainable Artificial Intelligence?
Both 'sustainability' and 'artificial intelligence' can be hard concepts to grapple with. I do not believe I can pin down two incredibly complex terms in one article. Rather, I think of this as a short exploration of different ways to define sustainable artificial intelligence (AI). If you have comments or thoughts, they would be very much appreciated. These thoughts come after a discussion on Sustainable AI that I moderated on the 21st of May as part of my role at the Norwegian Artificial Intelligence Research Consortium.
US Military Seeks to Speed AI Adoption for Support Systems - AI Trends
The US military needs to scale up its use of AI or be left behind by adversaries, Lt. Gen. Michael Groen, chief of the Pentagon's Joint AI Center (JAIC), told a recent conference of the National Defense Industrial Association, according to a report from UPI. While current military use of AI "is a step in the right direction, we need to start building on it," stated Groen, who was appointed head of the JAIC in October. He is the second director of the JAIC, or "the jake" in Pentagon parlance, which was set up by Congress in 2018. The first director was Air Force Lt. Gen. John N.T. "Jack" Shanahan, who retired last year. Noting that China has said it intends "to be dominant in AI by 2030," the Pentagon has focused on a five-year program culminating in 2027.
- North America > United States (1.00)
- Asia > China (0.25)
- Oceania > Australia (0.05)
- (10 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military > Air Force (1.00)
'This is bigger than just Timnit': How Google tried to silence a critic and ignited a movement
Timnit Gebru--a giant in the world of AI and then co-lead of Google's AI ethics team--was pushed out of her job in December. Gebru had been fighting with the company over a research paper that she'd coauthored, which explored the risks of the AI models that the search giant uses to power its core products--the models are involved in almost every English query on Google, for instance. The paper called out the potential biases (racial, gender, Western, and more) of these language models, as well as the outsize carbon emissions required to compute them. Google wanted the paper retracted, or any Google-affiliated authors' names taken off; Gebru said she would do so if Google would engage in a conversation about the decision. Instead, her team was told that she had resigned. After the company abruptly announced Gebru's departure, Google AI chief Jeff Dean insinuated that her work was not up to snuff--despite Gebru's credentials and history of groundbreaking research.
- North America > Canada > Ontario > Toronto (0.15)
- North America > United States > Texas > Taylor County (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- Law (1.00)
- Health & Medicine (0.95)
- Education (0.94)
- (3 more...)
Google fires Margaret Mitchell, another top researcher on its AI ethics team
Google has fired one of its top artificial intelligence researchers, Margaret Mitchell, escalating internal turmoil at the company following the departure of Timnit Gebru, another leading figure on Google's AI ethics team. Mitchell, who announced her firing on Twitter, did not immediately respond to requests for comment. In a statement to Reuters, Google said the firing followed a weeks-long investigation that found she moved electronic files outside the company. Google said Mitchell violated the company's code of conduct and security policies. Google's ethics in artificial intelligence research unit has been under scrutiny since December's dismissal of Gebru, a prominent Black researcher in Silicon Valley.
AI Ethics: A Self Reflection
I have been a data analytics professional for the past twelve years. Throughout my career, I have seen a steady rise in the use of data across industries, be it engineering, education, healthcare, or financial services. It was in 2017 that I read the Economist article "The world's most valuable resource is no longer oil, but data", an idea first coined in 2006 by Clive Humby, the UK mathematician and architect of Tesco's Clubcard. Many prominent figures, such as Meglena Kuneva, European Consumer Commissioner, later reiterated it in 2009 [1]. Everyone was talking about the infinite potential of data and the million ways it could be used.
- Education > Curriculum > Subject-Specific Education (0.56)
- Government (0.51)
- Banking & Finance (0.36)